A Scalable Algorithm For Sparse Portfolio Selection
The sparse portfolio selection problem is one of the most famous and
frequently studied problems in the optimization and financial economics
literatures. In a universe of risky assets, the goal is to construct a
portfolio with maximal expected return and minimum variance, subject to an
upper bound on the number of positions, linear inequalities and minimum
investment constraints. Existing certifiably optimal approaches to this problem
do not converge within a practical amount of time at real-world problem sizes
with more than 400 securities. In this paper, we propose a more scalable
approach. By imposing a ridge regularization term, we reformulate the problem
as a convex binary optimization problem, which is solvable via an efficient
outer-approximation procedure. We propose various techniques for improving the
performance of the procedure, including a heuristic which supplies high-quality
warm-starts, a preprocessing technique for decreasing the gap at the root node,
and an analytic technique for strengthening our cuts. We also study the
problem's Boolean relaxation, establish that it is second-order-cone
representable, and supply a sufficient condition for its tightness. In
numerical experiments, we establish that the outer-approximation procedure
gives rise to dramatic speedups for sparse portfolio selection problems.
Comment: Submitted to INFORMS Journal on Computing
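As a rough illustration of the regularized problem this abstract describes (not the paper's outer-approximation procedure), the sketch below brute-forces the cardinality-constrained, ridge-regularized mean-variance problem on a toy universe. The function name, the ridge weight, and the budget constraint 1'x = 1 are illustrative assumptions; `mu` and `Sigma` are NumPy arrays.

```python
import itertools
import numpy as np

def best_sparse_portfolio(mu, Sigma, k, ridge=0.1):
    # Enumerate every size-k support S and solve the restricted
    # ridge-regularized QP in closed form:
    #   min (1/2) x'(Sigma + ridge*I)x - mu'x   s.t.  1'x = 1, supp(x) = S.
    # Only viable for tiny universes; the paper's outer-approximation
    # procedure is what makes sizes beyond 400 securities tractable.
    n = len(mu)
    best_val, best_x = np.inf, None
    for S in itertools.combinations(range(n), k):
        S = list(S)
        Q = Sigma[np.ix_(S, S)] + ridge * np.eye(k)
        Qinv = np.linalg.inv(Q)
        ones = np.ones(k)
        # KKT solution of the equality-constrained QP on support S:
        # x = Q^{-1}(mu + nu*1), with nu chosen so that 1'x = 1.
        nu = (1.0 - ones @ Qinv @ mu[S]) / (ones @ Qinv @ ones)
        x_S = Qinv @ (mu[S] + nu * ones)
        val = 0.5 * x_S @ Q @ x_S - mu[S] @ x_S
        if val < best_val:
            best_x = np.zeros(n)
            best_x[S] = x_S
            best_val = val
    return best_x, best_val
```

The ridge term is exactly what makes each restricted subproblem strongly convex and solvable in closed form; the enumeration over supports is the part the paper replaces with outer-approximation cuts.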
HIT and brain reward function: a case of mistaken identity (theory)
This paper employs a case study from the history of neuroscience, brain reward function, to scrutinize the inductive argument for the so-called "Heuristic Identity Theory" (HIT). The case fails to support HIT, illustrating why other case studies previously thought to provide empirical support for HIT also fold under scrutiny. After distinguishing two different ways of understanding the types of identity claims presupposed by HIT and considering other conceptual problems, we conclude that HIT is not an alternative to the traditional identity theory so much as a relabeling of previously discussed strategies for mechanistic discovery.
Truth, Ramsification, and the Pluralist's Revenge
Functionalists about truth employ Ramsification to produce an implicit definition of the theoretical term _true_, but doing so requires determining that the theory introducing that term is itself true. A variety of putative dissolutions to this problem of epistemic circularity are shown to be unsatisfactory. One solution is offered on functionalists' behalf, though it has the upshot that they must tread on their anti-pluralist commitment.
Embodied Cognition: Grounded Until Further Notice?
Embodied Cognition is the kind of view that is all trees, no forest. Mounting experimental evidence gives it momentum in fleshing out the theoretical problems inherent in Cognitivists' separation of mind and body. But the more its proponents compile such evidence, the more the fundamental concepts of Embodied Cognition remain in the dark. This conundrum is nicely exemplified by Pecher and Zwaan's book, Grounding Cognition, which is a programmatic attempt to rally together an array of empirical results and linguistic data, and its successes in this endeavor nicely epitomize current directions among the various research provinces of Embodied Cognition. The untoward drawback, however, is that such successes are symptomatic of the growing imbalance between experimental progress and theoretical interrogation. In particular, one of the theoretical cornerstones of Embodied Cognition, namely the very concept of grounding under investigation here, continues to go unilluminated. Hence, the advent of this volume indicates that, now more than ever, the concept of grounding is in dire need of some plain old-fashioned conceptual analysis. In that sense, Embodied Cognition is grounded until further notice.
First principles in the life sciences: The free-energy principle, organicism, and mechanism
The free-energy principle claims that biological systems behave adaptively, maintaining their physical integrity, only if they minimize the free energy of their sensory states. Originally proposed to account for perception, learning, and action, the free-energy principle has been applied to the evolution, development, morphology, and function of the brain, and has been called a "postulate," a "mandatory principle," and an "imperative." While it might afford a theoretical foundation for understanding the complex relationship between physical environment, life, and mind, its epistemic status and scope are unclear. Also unclear is how the free-energy principle relates to prominent theoretical approaches to life science phenomena, such as organicism and mechanicism. This paper clarifies both issues, and identifies limits and prospects for the free-energy principle as a first principle in the life sciences.
Ontic explanation is either ontic or explanatory, but not both
This paper advances three related arguments showing that the ontic conception of explanation (OC), which is often adverted to in the mechanistic literature, is inferentially and conceptually incapacitated, and in ways that square poorly with scientific practice. Firstly, the main argument that would speak in favor of OC is invalid, and faces several objections. Secondly, OC's superimposition of ontic explanation and singular causation leaves it unable to accommodate scientifically important explanations. Finally, attempts to salvage OC by reframing it in terms of 'ontic constraints' just concede the debate to the epistemic conception of explanation. Together, these arguments indicate that the epistemic conception is more or less the only game in town.
Sparse PCA With Multiple Components
Sparse Principal Component Analysis (sPCA) is a cardinal technique for
obtaining combinations of features, or principal components (PCs), that explain
the variance of high-dimensional datasets in an interpretable manner. This
involves solving a sparsity and orthogonality constrained convex maximization
problem, which is extremely computationally challenging. Most existing works
address sparse PCA via methods, such as iteratively computing one sparse PC
and deflating the covariance matrix, that do not guarantee the orthogonality,
let alone the optimality, of the resulting solution when we seek multiple
mutually orthogonal PCs. We challenge this status quo by reformulating the orthogonality
conditions as rank constraints and optimizing over the sparsity and rank
constraints simultaneously. We design tight semidefinite relaxations to supply
high-quality upper bounds, which we strengthen via additional second-order cone
inequalities when each PC's individual sparsity is specified. Further, we
derive a combinatorial upper bound on the maximum amount of variance explained
as a function of the support. We exploit these relaxations and bounds to
propose exact methods and rounding mechanisms that, together, obtain solutions
with a bound gap on the order of 0%-15% for real-world datasets with p = 100s
or 1000s of features and r \in {2, 3} components. Numerically, our algorithms
match (and sometimes surpass) the best performing methods in terms of fraction
of variance explained and systematically return PCs that are sparse and
orthogonal. In contrast, we find that existing methods like deflation return
solutions that violate the orthogonality constraints, even when the data is
generated according to sparse orthogonal PCs. Altogether, our approach solves
sparse PCA problems with multiple components to certifiable (near) optimality
in a practically tractable fashion.
Comment: Updated version with improved algorithmics and a new section
containing a generalization of the Gershgorin circle theorem; comments or
suggestions welcome
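The orthogonality failure this abstract attributes to deflation is easy to reproduce on toy data. The sketch below is an illustrative baseline, not the paper's method: it computes r sparse PCs with a simple truncated power method plus Hotelling deflation (function names and iteration counts are my assumptions). The resulting loadings are sparse and unit-norm, but nothing forces distinct components to be orthogonal.

```python
import numpy as np

def truncated_power_pc(Sigma, s, iters=200, seed=0):
    # One sparse PC via the truncated power method: at each step,
    # keep only the s largest-magnitude entries of Sigma @ v, renormalize.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(Sigma.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = Sigma @ v
        w[np.argsort(np.abs(w))[:-s]] = 0.0  # zero all but top-s entries
        v = w / np.linalg.norm(w)
    return v

def deflation_sparse_pca(Sigma, s, r):
    # r sparse PCs via iterative Hotelling deflation -- the kind of
    # baseline the abstract critiques: no orthogonality guarantee
    # across components.
    pcs, S = [], Sigma.copy()
    for _ in range(r):
        v = truncated_power_pc(S, s)
        pcs.append(v)
        S = S - (v @ S @ v) * np.outer(v, v)  # deflate explained variance
    return np.column_stack(pcs)
```

Inspecting `abs(V[:, 0] @ V[:, 1])` for the returned loadings `V` on random covariance matrices typically reveals a nonzero overlap between supports, precisely the violation that the paper's rank-constrained formulation is designed to rule out.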
Scientific Representation
James Nguyen & Roman Frigg (2022). Scientific Representation. Cambridge University Press, 90pp., €21.23 (Paperback), ISBN: 978100900915
- …